14 research outputs found

    Development of 3D Sculpted, Hyper-Realistic Biomimetic Eyes for Humanoid Robots and Medical Ocular Prostheses

    Hyper-realistic Humanoid Bio-robotic Systems (HHBS) aim to emulate the human body in material, form and function. However, the standard approach of fitting static ocular prostheses in modern HHBS designs falls short of this aim, as the artificial eyes lack the intricate, dynamic behaviour of the natural human iris. This paper outlines the development and construction of a pair of realistic artificial eyes with embedded optical sensors that simulate the autonomous fluctuations of the human iris in response to emotional and light stimuli while retaining the sensing capability and material appearance of the organic eye. The objective of this auto-dynamic pupillary framework is to advance the external, embodied realism of HHBS towards a more accurate operational and physical simulation of the human being. Future development of the outlined optical system has potential applications in medical ocular prosthetic design, where more naturalistic eye replicas could reduce the unease and discomfort commonly associated with conventional fixed artificial eyes.
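
    To make the dynamic pupil behaviour described above concrete, the following minimal Python sketch maps a normalised ambient light reading to a pupil diameter and then to an actuator angle. It is an illustration only, assuming a roughly logarithmic 8 mm-to-2 mm constriction range and a hypothetical 0-90 degree servo; it is not the authors' implementation.

    ```python
    import math

    def pupil_diameter_mm(light_level: float) -> float:
        """Approximate pupil diameter for a normalised light level in [0, 1].

        Assumes (for illustration) a logarithmic constriction from ~8 mm in
        darkness to ~2 mm in bright light; a real curve would be calibrated.
        """
        light_level = min(max(light_level, 0.0), 1.0)
        return 8.0 - 6.0 * math.log10(1.0 + 9.0 * light_level)

    def diameter_to_servo_angle(diameter_mm: float,
                                min_mm: float = 2.0, max_mm: float = 8.0,
                                min_deg: float = 0.0, max_deg: float = 90.0) -> float:
        """Linearly map pupil diameter onto a hypothetical 0-90 degree iris actuator."""
        ratio = (diameter_mm - min_mm) / (max_mm - min_mm)
        return min_deg + ratio * (max_deg - min_deg)

    if __name__ == "__main__":
        for light in (0.0, 0.25, 0.5, 1.0):
            d = pupil_diameter_mm(light)
            print(f"light={light:.2f} -> pupil={d:.1f} mm -> servo={diameter_to_servo_angle(d):.0f} deg")
    ```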

    Artificial Eyes with Emotion and Light Responsive Pupils for Realistic Humanoid Robots

    This study employs a novel 3D-engineered robotic eye system with dielectric elastomer actuator (DEA) pupils and a 3D-sculpted, colourised gelatin iris membrane to replicate the appearance and materiality of the human eye. A camera for facial expression analysis (FEA) was installed in the left eye and a photoresistor for measuring light levels in the right. Unlike previous prototypes, this configuration permits the robotic eyes to respond to both light and emotion in a manner approximating the human eye. A series of experiments was undertaken using a pupil-tracking headset to monitor test subjects observing positive and negative video stimuli. A second test measured pupil dilation ranges under high and low light levels using a high-powered artificial light. These data were converted into a series of algorithms for servomotor triangulation to control the photosensitive and emotive pupil dilation sequences. The robotic eyes were evaluated against the pupillometric data and video feeds of the human eyes to determine operational accuracy. Finally, the dilating robotic eye system was installed in a realistic humanoid robot (RHR) and comparatively evaluated in a human-robot interaction (HRI) experiment. The results show that the robotic eyes can emulate the average pupil reflex of the human eye under typical light conditions and in response to positive and negative emotive stimuli. However, the results of the HRI experiment indicate that replicating natural eye-contact behaviour was more significant than emulating pupil dilation.
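
    As a rough illustration of how a light reading and an emotion signal might be fused into a single pupil command, the sketch below combines a normalised photoresistor value with an arousal score from facial expression analysis. The fusion rule, gain and ranges are assumptions for illustration, not the published algorithm.

    ```python
    def fuse_pupil_command(light_norm: float, arousal: float,
                           dilation_gain: float = 0.3) -> float:
        """Return a normalised pupil aperture in [0, 1] (0 = fully constricted).

        light_norm: photoresistor reading scaled to 0 (dark) .. 1 (bright).
        arousal: emotional arousal from facial expression analysis, 0 (neutral)
                 .. 1 (strong), which dilates the pupil above the light baseline.
        dilation_gain: assumed headroom reserved for emotional dilation.
        """
        light_norm = min(max(light_norm, 0.0), 1.0)
        arousal = min(max(arousal, 0.0), 1.0)
        baseline = 1.0 - light_norm          # brighter light -> smaller aperture
        return min(1.0, baseline + dilation_gain * arousal)

    # Example: moderate light with a strong emotive stimulus
    print(fuse_pupil_command(light_norm=0.6, arousal=0.8))  # ≈ 0.64
    ```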

    A Novel Speech to Mouth Articulation System for Realistic Humanoid Robots

    A significant ongoing issue in realistic humanoid robots (RHRs) is inaccurate speech-to-mouth synchronisation. Even the most advanced robotic systems cannot authentically emulate the natural movements of the human jaw, lips and tongue during verbal communication. These visual and functional irregularities can propagate the Uncanny Valley Effect (UVE) and reduce speech understanding in human-robot interaction (HRI). This paper outlines the development and testing of a novel Computer Aided Design (CAD) robotic mouth prototype with buccinator actuators for emulating the fluidic movements of the human mouth. The robotic mouth system incorporates a custom Machine Learning (ML) application that measures the acoustic qualities of speech synthesis (SS) and translates this data into servomotor triangulation for triggering jaw, lip and tongue positions. The objective of this study is to improve current robotic mouth design and provide engineers with a framework for increasing the authenticity, accuracy and communication capabilities of RHRs for HRI. The primary contributions of this study are the engineering of a robotic mouth prototype and the programming of a speech-processing application that achieved a 79.4% syllable accuracy, 86.7% lip synchronisation accuracy and a 0.1 s speech-to-mouth articulation differential.
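
    The sketch below illustrates, in simplified form, the kind of acoustics-to-actuator mapping the abstract describes: it derives a per-frame jaw-opening command from the short-time energy of a speech waveform. The frame length, servo range and energy-to-angle mapping are assumptions; the paper's ML pipeline is considerably more involved.

    ```python
    import numpy as np

    def frame_energy(signal: np.ndarray, frame_len: int = 320) -> np.ndarray:
        """Root-mean-square energy per non-overlapping frame (20 ms at 16 kHz)."""
        n_frames = len(signal) // frame_len
        frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
        return np.sqrt(np.mean(frames ** 2, axis=1))

    def jaw_angles(signal: np.ndarray, max_deg: float = 25.0) -> np.ndarray:
        """Map normalised frame energy onto a hypothetical 0-25 degree jaw servo range."""
        energy = frame_energy(signal)
        peak = energy.max() if energy.max() > 0 else 1.0
        return (energy / peak) * max_deg

    if __name__ == "__main__":
        # Synthetic 1-second "utterance": amplitude-modulated noise at 16 kHz.
        sr = 16000
        t = np.linspace(0.0, 1.0, sr, endpoint=False)
        speech = np.random.randn(sr) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
        print(jaw_angles(speech)[:10])
    ```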

    Modelling User Preference for Embodied Artificial Intelligence and Appearance in Realistic Humanoid Robots

    Realistic humanoid robots (RHRs) with embodied artificial intelligence (EAI) have numerous applications in society, as the human face is the most natural interface for communication and the human body the most effective form for traversing man-made environments. Developing RHRs with high degrees of human-likeness therefore provides a life-like vessel for humans to interact physically and naturally with technology in a manner unmatched by any other form of non-biological human emulation. This study outlines a human-robot interaction (HRI) experiment employing two automated RHRs with contrasting appearances and personalities. The selective sample comprised 20 individuals, categorised by age and gender for a diverse statistical analysis. Galvanic skin response, facial expression analysis and AI analytics permitted cross-analysis of biometric and AI data with participant testimonies to support the results. The study concludes that younger test subjects preferred HRI with a younger-looking RHR and the more senior age group with an older-looking RHR. Moreover, the female test group preferred HRI with a younger-looking RHR and male subjects with an older-looking RHR. This research is useful for modelling the appearance and personality of RHRs with EAI for specific roles such as care for the elderly and social companionship for the young, isolated and vulnerable.
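
    A minimal sketch of the kind of cross-analysis reported above, tallying preferred robot appearance by age band and by gender; the participant records are hypothetical placeholders, not the study's data.

    ```python
    from collections import Counter

    participants = [
        {"age_band": "young", "gender": "F", "preferred": "younger-looking RHR"},
        {"age_band": "young", "gender": "M", "preferred": "older-looking RHR"},
        {"age_band": "senior", "gender": "F", "preferred": "younger-looking RHR"},
        {"age_band": "senior", "gender": "M", "preferred": "older-looking RHR"},
        # ... the remaining 16 of the 20 participants would follow the same shape
    ]

    # Tally preference counts within each age band and within each gender group.
    by_age = Counter((p["age_band"], p["preferred"]) for p in participants)
    by_gender = Counter((p["gender"], p["preferred"]) for p in participants)
    print(by_age)
    print(by_gender)
    ```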

    The Multimodal Turing Test for Realistic Humanoid Robots with Embodied Artificial Intelligence

    Alan Turing developed the Turing Test as a method for determining whether artificial intelligence (AI) can deceive human interrogators into believing it is sentient by competently answering questions at a confidence rate of 30% or more. However, the Turing Test is concerned with natural language processing (NLP) and neglects the significance of appearance, communication and movement. The theoretical proposition at the core of this paper, 'can machines emulate human beings?', concerns both functionality and materiality. Many scholars consider the creation of a realistic humanoid robot (RHR) that is perceptually indistinguishable from a human as the apex of humanity's technological capabilities. Nevertheless, no comprehensive development framework exists for engineers to achieve higher modes of human emulation, and no current evaluation method is nuanced enough to detect the causal effects of the Uncanny Valley (UV). The Multimodal Turing Test (MTT) provides such a methodology and offers a foundation for creating higher levels of human likeness in RHRs to enhance human-robot interaction (HRI).

    Ambient Information Visualisation and Visitors' Technology Acceptance of Mixed Reality in Museums

    The visualisation of historical information and storytelling in museums is a crucial process for transferring knowledge by engaging the museum audience directly and simply. Until recently, technological limitations meant museums were restricted to 2D and 3D screen-based information displays. However, advances in mixed reality (MR) devices permit a virtual overlay that amalgamates real-world and virtual environments into a single spectrum. These holographic devices project a 3D space around the user which can be augmented with virtual artefacts, potentially changing the traditional museum visitor experience. Few research studies focus on utilising this virtual space to generate objects that do not visually inhibit or distract the operator. This paper therefore introduces the Ambient Information Visualisation Concept (AIVC) as a new form of storytelling, which can enhance the communication and interactivity between museum visitors and exhibits by measuring and sustaining an optimum spatial environment around the user. Furthermore, the paper investigates the perceptual influence of AIVC on users' level of engagement in the museum. The research utilises the Microsoft HoloLens, one of the most advanced imaging devices available to date, to deploy the AIVC in a historical storytelling scene, 'The Battle', in the Egyptian department at The Manchester Museum. The research further measures user acceptance of the MR prototype by adopting the Technology Acceptance Model (TAM). The constructs investigated include personal innovativeness (PI), enjoyment (ENJ), usefulness (USF), ease of use (EOU) and willingness of future use (WFU). The sample comprised 47 participants drawn from the museum's daily visitors. The results indicate that willingness of future use is the primary outcome of this study, followed by usefulness. Further findings show that the majority of users found the technology highly engaging and easy to use. The combination of the proposed system and AIVC in museum storytelling has extensive applications in museums, galleries and cultural heritage sites to enhance the visitor experience.
    Keywords: mixed reality; storytelling; visitor acceptance; museum; HMDs; ambient information visualisation; Microsoft HoloLens
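
    As an illustration of how the TAM constructs listed above are typically scored, the sketch below averages a participant's Likert-scale item responses into per-construct scores. The item groupings and the 5-point scale are assumptions, not the study's actual questionnaire.

    ```python
    from statistics import mean

    # Hypothetical responses from one participant, keyed by questionnaire item.
    responses = {
        "PI1": 4, "PI2": 5,      # personal innovativeness
        "ENJ1": 5, "ENJ2": 4,    # enjoyment
        "USF1": 4, "USF2": 4,    # usefulness
        "EOU1": 5, "EOU2": 5,    # ease of use
        "WFU1": 5, "WFU2": 4,    # willingness of future use
    }

    constructs = ["PI", "ENJ", "USF", "EOU", "WFU"]
    # Average all items whose name (minus trailing digits) matches each construct.
    scores = {
        c: mean(v for item, v in responses.items() if item.rstrip("0123456789") == c)
        for c in constructs
    }
    print(scores)  # e.g. {'PI': 4.5, 'ENJ': 4.5, 'USF': 4, 'EOU': 5, 'WFU': 4.5}
    ```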

    A Framework for Constructing and Evaluating the Role of MR as a Holographic Virtual Guide in Museums

    Mixed reality (MR) is a cutting-edge technology at the forefront of many new applications in the tourism and cultural heritage sector. This study aims to reshape the museum experience by creating a highly engaging and immersive visit that combines real-time visual and audio information and computer-generated imagery with museum artefacts and custom displays. The research introduces a theoretical framework that assesses the potential of an MR guidance system in terms of usefulness, ease of use, enjoyment, interactivity, touring and future applications. The evaluation deploys the MuseumEye MR application in the Egyptian Museum, Cairo, using mixed-method surveys and a sample of 171 participants. The questionnaire results highlight the importance of the mediating role of the tour guide in strengthening the relationship between perceived usefulness, ease of use, multimedia, UI design, interactivity and the intention to use. Furthermore, the results reveal the potential future use of MR in museums and its capacity to sustain engagement beyond the traditional museum visit, which can strengthen the economic position of museums and the cultural heritage sector.
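
    To illustrate the mediation-style analysis implied above, the sketch below runs a simple Baron-and-Kenny style comparison of total and direct effects on synthetic data, with a guide-related construct as the mediator between perceived usefulness and intention to use. The data, coefficients and method are illustrative assumptions, not the study's analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 171                                   # sample size reported in the abstract
    usefulness = rng.normal(4.0, 0.6, n)      # hypothetical Likert-style construct scores
    guide = 0.6 * usefulness + rng.normal(0, 0.4, n)            # mediator: guide experience
    intention = 0.3 * usefulness + 0.5 * guide + rng.normal(0, 0.4, n)

    def ols_slope(x, y, control=None):
        """Slope of y on x from ordinary least squares, optionally controlling for a second predictor."""
        cols = [np.ones_like(x), x] + ([control] if control is not None else [])
        beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        return beta[1]

    total = ols_slope(usefulness, intention)                  # total effect
    direct = ols_slope(usefulness, intention, control=guide)  # direct effect with mediator
    print(f"total={total:.2f}, direct={direct:.2f}, mediated share={(total - direct) / total:.0%}")
    ```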
